Neural networks in virtual reference tuning
Authors
Abstract
Similar articles
NONLINEAR VIRTUAL REFERENCE FEEDBACK TUNING: Application of Neural Networks to Direct Controller Design
Abstract: Virtual Reference Feedback Tuning (VRFT) is a direct controller design methodology which can be applied in both the linear and the nonlinear case. In this work, neural network controllers have been designed following nonlinear VRFT principles, constituting what could be considered a particular scheme of direct neural Model Reference Control. The approach has been applied to a simulated cran...
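The core VRFT idea behind this kind of design can be sketched in a few lines: invert a chosen reference model on recorded plant output to obtain a virtual reference, form the virtual tracking error, and fit a neural network so that it maps that error to the recorded input. The toy plant, first-order reference model, regressor choice, and use of scikit-learn's MLPRegressor below are illustrative assumptions, not the setup of the cited paper.

```python
# Hedged sketch of nonlinear VRFT with a neural-network controller.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# 1) Open-loop data from a toy nonlinear plant (assumed for illustration).
N = 2000
u = rng.uniform(-1.0, 1.0, N)                 # excitation input
y = np.zeros(N)
for t in range(N - 1):
    y[t + 1] = 0.6 * y[t] + 0.4 * np.tanh(2.0 * u[t])   # hypothetical plant

# 2) Reference model M: y(t+1) = a*y(t) + (1-a)*r(t). Invert it on the measured
#    output to obtain the virtual reference and the virtual tracking error.
a = 0.8
r_virt = (y[1:] - a * y[:-1]) / (1.0 - a)     # r_virt[t] reproduces y[t+1] through M
e_virt = r_virt - y[:-1]                      # virtual error e(t) = r(t) - y(t)

# 3) Direct controller identification: fit a neural net so that C(e(t), y(t)) ≈ u(t).
X = np.column_stack([e_virt, y[:-1]])         # controller regressors (an assumption)
controller = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
controller.fit(X, u[:-1])
print("training fit (R^2):", controller.score(X, u[:-1]))
```

If the fit is good, closing the loop with the trained network should make the controlled system behave approximately like the reference model, with the controller identified directly from data and no explicit plant model.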
Sensitivity shaping via Virtual Reference Feedback Tuning
The Virtual Reference Feedback Tuning (VRFT) is a data based method for the design of feedback controllers. In previous papers, the VRFT has been presented for the solution of the one degree of freedom model-reference control problem in which the objective is to shape the I/O transfer function of the control system. In this paper, the VRFT approach is extended so that it can be used for the sha...
Virtual Reference Feedback Tuning for Industrial PID Controllers
In this paper, we propose a data-based auto-tuning method for industrial PID controllers, which does not rely on a model of the plant. The method is inspired by the Virtual Reference Feedback Tuning approach for data-based controller tuning, but it is tailored to the framework of PID controller design. The method is entirely developed in a deterministic, continuous-time setting, where the assum...
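Because a PID law is linear in its gains, a VRFT-style fit of (Kp, Ki, Kd) collapses to a single least-squares problem on the recorded data. The discrete-time toy plant, sampling period, and reference model below are assumptions for illustration; the cited method itself is formulated in continuous time.

```python
# Hedged sketch of VRFT-style least-squares tuning of PID gains.
import numpy as np

rng = np.random.default_rng(1)
Ts, N = 0.05, 2000                      # assumed sampling period and data length

# Open-loop data from a toy first-order plant (assumption for illustration).
u = rng.uniform(-1.0, 1.0, N)
y = np.zeros(N)
for t in range(N - 1):
    y[t + 1] = 0.9 * y[t] + 0.1 * u[t]

# Reference model y(t+1) = a*y(t) + (1-a)*r(t); invert it to get the virtual
# reference and the virtual tracking error, exactly as in basic VRFT.
a = 0.7
r_virt = (y[1:] - a * y[:-1]) / (1.0 - a)
e = r_virt - y[:-1]

# A PID law u = Kp*e + Ki*∫e + Kd*de/dt is linear in its gains, so the VRFT
# fit reduces to one ordinary least-squares problem on the recorded data.
Phi = np.column_stack([e, np.cumsum(e) * Ts, np.gradient(e, Ts)])
(Kp, Ki, Kd), *_ = np.linalg.lstsq(Phi, u[:-1], rcond=None)
print(f"Kp={Kp:.3f}  Ki={Ki:.3f}  Kd={Kd:.3f}")
```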
Rodbar dam slope stability analysis using neural networks
In this study, an artificial neural network is presented for predicting the safety factor and the critical safety factor of heterogeneous earth dams, taking the effect of earthquake inertial forces into account. The model inputs are the dam height, the upstream slope angle, the seismic coefficient, the water height, and the strength parameters of the core and shell; its output is the safety factor. The most important quantity sought in slope stability analysis is the safety factor. In this study ...
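A minimal sketch of the kind of feed-forward regressor described above, with the stated inputs (dam height, upstream slope angle, seismic coefficient, water height, core and shell strength parameters) and the safety factor as output. The network size and the randomly generated placeholder data are assumptions; in the study the targets would come from actual slope-stability analyses.

```python
# Hedged sketch of an ANN safety-factor regressor; training data is random
# placeholder data, purely to illustrate the input/output structure.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(20, 120, n),    # dam height [m]
    rng.uniform(15, 45, n),     # upstream slope angle [deg]
    rng.uniform(0.0, 0.35, n),  # seismic coefficient [-]
    rng.uniform(5, 100, n),     # water height [m]
    rng.uniform(10, 60, n),     # core strength parameter [assumed units]
    rng.uniform(25, 50, n),     # shell strength parameter [assumed units]
])
fs = rng.uniform(0.8, 2.5, n)   # placeholder safety-factor labels

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X, fs)
print("predicted safety factor:", model.predict(X[:1]))
```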
Feature Weight Tuning for Recursive Neural Networks
This paper addresses how a recursive neural network model can automatically leave out useless information and emphasize important evidence, in other words, to perform “weight tuning” for higher-level representation acquisition. We propose two models, Weighted Neural Network (WNN) and Binary-Expectation Neural Network (BENN), which automatically control how much one specific unit contributes to ...
Journal
Journal title: Engineering Applications of Artificial Intelligence
Year: 2011
ISSN: 0952-1976
DOI: 10.1016/j.engappai.2011.04.003